
Newsroom

Delegating Your Thinking to ChatGPT: Are You Becoming Too Comfortable with AI to Think for Yourself?
When ChatGPT suffered an unexpected global outage in mid-2025, news media outlets covered it widely, and social media seemingly erupted in panic. Memes showcasing the impact of the outage quickly filled social media channels. People from all walks of life, including students, startup founders, marketing teams, media outlets, and researchers, scrambled to find alternatives.
With terms such as 'AI', 'LLMs', and 'GenAI' now part of daily vocabulary, the outage highlighted the widespread reliance on ChatGPT, with many users expressing their dependence on AI for daily tasks and work. The incident shows how deeply generative AI has become integrated into daily life, and how challenging it is to maintain uptime and reliability for such rapidly advancing platforms. The abrupt silence of this always-on AI assistant prompted not just frustration but anxiety: what now?
What now, indeed? This momentary loss of access (the outage lasted about twelve hours) revealed something more than a temporary inconvenience. It underscored our growing dependence on AI apps for so many mental tasks. ChatGPT didn't just go offline; it disrupted our cognitive routines.
This dependence raises an interesting question: Has ChatGPT made us too mentally lazy to think for ourselves?
When Convenience Becomes a Crutch
It's hard to overstate the usefulness of generative AI. With the ability to summarize complex documents, translate languages, copyedit text, generate code, suggest recipes, or recommend travel plans, ChatGPT and other large language models have become a digital 'Swiss Army knife' for the modern knowledge age. However, the same qualities that make AI so convenient also make it subtly corrosive, perhaps eroding our mental muscles.
We no longer need to fumble through first drafts, stare at a blank screen in frustrated brainstorming, or remember complex formulas. Why struggle with the blank page when an algorithm can give us a polished page in seconds, often better than our own first draft?
AI tools can augment your work and push your creativity. Some have voiced concern that the line between 'assistant' and 'replacement' is now blurred. If you hand off more and more mental labor to AI, you risk outsourcing not just your tasks but your thinking.
Dumbing Down in the Name of Efficiency
Cognitive science and organizational behavior research warn of the dangers of 'cognitive offloading': relying on external aids to perform mental tasks. When used habitually, large language models like ChatGPT might lead to a measurable decline in memory retention, critical thinking, and problem-solving (current indications are that they do).
Some research reports that teams relying heavily on AI to support decision-making are more prone to groupthink, less likely to challenge bad ideas, and demonstrate reduced creativity compared to control groups with limited reliance on AI. AI makes you more efficient and more productive at producing outputs, but it does not necessarily make you smarter, and it can have negative consequences. In other words, AI can make you cognitively lazy.
This trend is particularly alarming in educational settings, where students (and, perhaps even more disturbingly, teachers) frequently use AI tools to generate essays, solve math problems, and even engage in online class discussions. Students even use AI to answer ‘Tell me about yourself.’ AI is an excellent shortcut for many tasks. However, using AI can become a habit that, over time, causes a skill to be lost (or never developed in the first place). Use it or lose it!
The Politeness Trap
Another hidden hazard lies in how ChatGPT and other LLMs communicate. Their tone is famously helpful, polite, and upbeat, always aiming to please. While this makes for pleasant interactions, it may also cultivate a bias in how we process information. When everything is framed positively and politely, with minimal pushback, we risk losing touch with realism, dissent, and complexity: life's impolite sides. It is fine for GenAI systems to be polite. It is fine for GenAI to be culturally aware. It is not fine for GenAI to always tell you what you want to hear, in the way you want to hear it.
Good ideas are often challenged in real life, harsh truths are told, and not every decision comes packaged in friendly encouragement. Life is often not easy. If our dominant thinking partner is a machine trained to avoid offense and uncertainty, we may become too accustomed to affirmation and ill-prepared for disagreement. Again, use it or lose it!
Critical Thinking in the Age of AI
This is not a call to delete ChatGPT or ban AI tools from your personal life, schools, or workplaces. The ‘genie is out of the bottle’ and is not returning. However, we must shift from passive consumption to active engagement. Managers in many organizations now routinely check for AI-generated content from subordinates, even as they integrate AI into nearly every workflow. Generative AI should be a thinking partner, not a thinking replacement.
For example, using ChatGPT to spark a first draft or explore a new idea is vastly different from accepting that draft unquestioningly. Asking, ‘What would I write if I couldn't ask ChatGPT?’ is a good exercise for staying mentally engaged. Better still, write the first draft yourself. Cross-referencing AI-generated answers against your own expertise and independent research is another good mental habit when using AI. These exercises will help you control the storyline and communicate your ideas, not those of the AI app.
Qatar and the Ethical Conversation
Some of these themes will be center stage at the AI Ethics: The Convergence of Technology and Diverse Moral Traditions 2025 conference in Doha, which will explore the implications of AI on society, learning, and human agency. As Qatar positions itself as a leader in ethically and culturally mindful AI development, via its universities, research centers, businesses, and non-profits, these conversations are timely.
With a history of rapid adoption of cutting-edge technologies in many sectors, including education, transportation, energy, and tourism, Qatar and the other countries of the Gulf region have an opportunity to lead in innovation and critical reflection. How do we develop AI systems that support, not replace, human thought, judgment, and responsibility?
Rediscovering the Joy of Thinking
Thinking is sometimes hard, but it is also what makes us human. The messy process of trial and error, the internal debates, the creative bursts followed by doubt: these are valuable experiences of the human condition. Handing the burden of thinking to an always-ready assistant is tempting. However, when you do, you lose more than effort. You risk losing a piece of yourself. You die a little mentally.
The ChatGPT outage and other 'AI blackouts' are wake-up calls, showing how dependent we have become on AI in our daily mental life. However, the 'AI blackout' also gave us a brief period to think for ourselves.
As the age of AI progresses, do not forget the value of your mind. Use AI not as a crutch, but as a tool, a tool that supports, refines, and challenges, but does not supplant the act of human thought.
It is what makes us 'human'.
Dr. Jim Jansen is a principal scientist at Hamad Bin Khalifa University’s (HBKU) Qatar Computing Research Institute (QCRI).
This piece has been submitted by HBKU’s Communications Directorate on behalf of its author. The thoughts and views expressed are the author’s own and do not necessarily reflect an official University stance.